23 research outputs found

    Maximizing the number of users in an interactive video-on-demand system

    Video prefetching is a technique that has been proposed for the transmission of variable-bit-rate (VBR) videos over packet-switched networks. The objective of these protocols is to prefetch future frames at the customer's set-top box (STB) during light-load periods. Experimental results have shown that video prefetching is very effective and achieves much higher network utilization (and a potentially larger number of simultaneous connections) than traditional video smoothing schemes. The previously proposed prefetching algorithms, however, can only be implemented efficiently when there is one centralized server; in a distributed environment their performance degrades significantly. In this paper we introduce a new scheme that combines smoothing with prefetching to overcome the problem of distributed prefetching. We will show that our scheme performs almost as well as the centralized prefetching protocol, even though it is implemented in a distributed environment. In addition, we will introduce a call admission control algorithm for a fully interactive Video-on-Demand (VoD) system that utilizes this concept of distributed video prefetching. Using the theory of effective bandwidths, we will develop an admission control algorithm for new requests, based on the user's viewing behavior and the required Quality of Service (QoS).
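
    To make the admission control idea concrete, the following is a minimal Python sketch. It uses a common Gaussian-approximation form of effective bandwidth; the abstract does not specify the exact formula or parameters, so the function names, rates, and loss target below are illustrative assumptions, not the authors' algorithm.

        import math

        def effective_bandwidth(mean_rate, std_dev, loss_prob):
            """Gaussian-approximation effective bandwidth (a textbook form used
            here only for illustration): mean + k * sigma, where k grows as the
            tolerated overflow/loss probability shrinks."""
            k = math.sqrt(-2.0 * math.log(loss_prob))
            return mean_rate + k * std_dev

        def admit(new_stream, active_streams, link_capacity, loss_prob=1e-6):
            """Accept the new VoD request only if the aggregate effective
            bandwidth of all streams (existing plus new) fits the link."""
            total = sum(effective_bandwidth(m, s, loss_prob) for m, s in active_streams)
            mean_rate, std_dev = new_stream
            total += effective_bandwidth(mean_rate, std_dev, loss_prob)
            return total <= link_capacity

        # Streams described by (mean rate, std dev) in Mb/s on a 100 Mb/s link.
        active = [(4.0, 1.5), (6.0, 2.0), (3.5, 1.0)]
        print(admit((5.0, 2.5), active, link_capacity=100.0))  # True: enough headroom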

    Efficient resource management for end-to-end QoS guarantees in DiffServ networks

    The Differentiated Services (DiffServ) architecture has been proposed as a scalable solution for delivering end-to-end Quality of Service (QoS) guarantees over the Internet. While the scalability of the data plane emerges from the definition of only a small number of different service classes, the issue of a scalable control plane is still an open research problem. The initial proposal was to use a centralized agent, called Bandwidth Broker (BB), to manage the resources within each DiffServ domain and make local admission control decisions. In this paper, we propose an alternative distributed approach, where the local admission decisions are made independently at the edge routers of each domain. We will show, through simulation results, that this distributed approach can manage the network resources very efficiently, leading to lower bandwidth blocking rates when compared to traditional shortest-path admission control. Moreover, its simplicity and distributed implementation make it a very scalable solution for resource management in DiffServ networks.
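
    A rough sketch of the distributed decision follows. The abstract does not describe the state kept at the edge routers or the signalling between them, so the data structures and numbers below are assumptions for illustration only.

        class EdgeRouter:
            """Illustrative edge router making local admission decisions in a
            DiffServ domain; link capacities and reservations are assumed to be
            known locally (e.g. via periodic updates)."""

            def __init__(self, link_capacity):
                self.capacity = dict(link_capacity)              # link id -> capacity
                self.reserved = {l: 0.0 for l in link_capacity}

            def admit(self, path, demand):
                """Admit the flow on `path` (a list of link ids) only if every
                link can still accommodate `demand`; otherwise block it."""
                if any(self.reserved[l] + demand > self.capacity[l] for l in path):
                    return False
                for l in path:
                    self.reserved[l] += demand
                return True

        edge = EdgeRouter({"e1-c1": 100.0, "c1-c2": 100.0, "c2-e2": 100.0})
        print(edge.admit(["e1-c1", "c1-c2", "c2-e2"], 40.0))  # True
        print(edge.admit(["e1-c1", "c1-c2", "c2-e2"], 70.0))  # False: the links would exceed capacity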

    Quality of service support in differentiated services packet networks

    During the past few years, new types of Internet applications which require performance beyond the best-effort service provided by the current Internet have emerged. These applications include the transmission of voice and video, which require a fixed end-to-end delay bound in order for the end-user to perceive an acceptable level of service quality. The Differentiated Services (DiffServ) model has been proposed recently to enhance the traditional best-effort service and provide certain Quality of Service (QoS) guarantees to these applications. Its current definition, however, does not allow for a high level of flexibility or assurance and, therefore, it cannot be widely deployed. In this paper, we introduce a new protocol for a DiffServ architecture which provides a simple and efficient solution to the above problem. It is a complete protocol, in the sense that it deals with the issues of packet scheduling, admission control, and congestion control. We will show, through experimental results, that our proposed protocol can improve the flexibility and assurance provided by current solutions, while maintaining a high level of network utilization.
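
    The abstract does not detail the scheduling discipline, so the sketch below is only a generic illustration of per-class packet scheduling in a DiffServ node: strict priority for the expedited (voice/video) class combined with deficit-round-robin sharing for the remaining classes. Class names and quanta are assumptions.

        from collections import deque

        class DiffServScheduler:
            """Toy per-class scheduler: the expedited class (EF) gets strict
            priority; the other classes share the residual service in proportion
            to their quanta via a simple deficit round-robin."""

            def __init__(self, weights):
                self.queues = {"EF": deque()}
                self.queues.update({c: deque() for c in weights})
                self.weights = weights                   # class -> quantum (bytes)
                self.deficit = {c: 0 for c in weights}

            def enqueue(self, cls, packet_size):
                self.queues[cls].append(packet_size)

            def dequeue(self):
                """Return (class, packet_size) of the next packet to send."""
                if self.queues["EF"]:
                    return "EF", self.queues["EF"].popleft()
                while any(self.queues[c] for c in self.weights):
                    for cls, quantum in self.weights.items():
                        if self.queues[cls]:
                            self.deficit[cls] += quantum
                            if self.queues[cls][0] <= self.deficit[cls]:
                                size = self.queues[cls].popleft()
                                self.deficit[cls] -= size
                                return cls, size
                return None

        sched = DiffServScheduler(weights={"AF1": 1500, "BE": 500})
        sched.enqueue("BE", 1200); sched.enqueue("AF1", 1500); sched.enqueue("EF", 200)
        print(sched.dequeue())  # ('EF', 200): expedited traffic is served first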

    Dynamic organization schemes for cooperative proxy caching

    In a generic cooperative caching architecture, web proxies form a mesh network. When a proxy cannot satisfy a request, it forwards the request to the other nodes of the mesh. Since a local cache cannot fulfill the majority of the arriving requests (typical values of the local hit ratio are about 30-50%), the volume of queries diverted to neighboring nodes can grow substantially and may consume a considerable amount of system resources. A proxy does not need to cooperate with every node of the mesh for the following reasons: (i) the traffic characteristics may be highly diverse; (ii) the contents of some nodes may extensively overlap; (iii) the inter-node distance might be too large. Furthermore, organizing N proxies in a mesh topology introduces scalability problems, since the number of queries is of the order of N^2. Therefore, restricting the number of neighbors for each proxy to k < N - 1 will likely lead to a balanced trade-off between query overhead and hit ratio, provided that cooperation takes place among useful neighbors. The selection of useful neighbors, however, is not straightforward, for a number of reasons. An obvious reason is that web access patterns change dynamically. Furthermore, the availability of proxies is not always globally known. This paper proposes a set of algorithms that enable proxies to independently explore the network and choose the k most beneficial (according to local criteria) neighbors in a dynamic fashion. The simulation experiments illustrate that the proposed dynamic neighbor reconfiguration schemes significantly reduce the overhead incurred by the mesh topology while yielding higher hit ratios compared to the static approach.
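
    The neighbor-selection idea can be sketched as follows. The scoring function and exploration policy below are placeholders chosen for illustration; the paper proposes several concrete variants that differ in how the benefit of a peer is measured.

        import random

        class Proxy:
            """Sketch of a proxy that dynamically keeps its k most beneficial
            neighbors, scored by the remote hits each peer has provided."""

            def __init__(self, k, known_peers):
                self.k = k
                self.known = set(known_peers)
                self.score = {p: 0.0 for p in known_peers}   # remote hits per peer
                self.neighbors = set(random.sample(sorted(self.known), k))

            def record_remote_hit(self, peer):
                self.score[peer] = self.score.get(peer, 0.0) + 1.0

            def reconfigure(self, probes=2):
                """Probe a few non-neighbors and keep only the top-k peers."""
                candidates = set(self.neighbors)
                outsiders = list(self.known - self.neighbors)
                candidates.update(random.sample(outsiders, min(probes, len(outsiders))))
                ranked = sorted(candidates, key=lambda p: self.score[p], reverse=True)
                self.neighbors = set(ranked[:self.k])

        p = Proxy(k=2, known_peers=["A", "B", "C", "D"])
        for _ in range(5):
            p.record_remote_hit("C")     # C keeps serving our forwarded misses
        p.reconfigure()
        print("C" in p.neighbors)        # True: C is promoted to a neighbor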

    A general framework for searching in distributed data repositories

    This paper proposes a general framework for searching large distributed repositories. Examples of such repositories include sites with music/video content, distributed digital libraries, distributed caching systems, etc. The framework is based on the concept of neighborhood; each client keeps a list of the most beneficial sites according to past experience, which are visited first when the client searches for some particular content. Exploration methods continuously update the neighborhoods in order to follow changes in access patterns. Depending on the application, several variations of search and exploration processes are proposed. Experimental evaluation demonstrates the benefits of the framework in different scenarios.
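
    A minimal sketch of the search-and-exploration loop follows; the scoring rule and the single random exploration step are illustrative assumptions rather than the specific variations studied in the paper.

        import random

        def search(client_scores, content, all_sites, neighborhood_size=3):
            """client_scores: site -> benefit observed so far.
            all_sites: site -> set of contents it stores.
            Visit the neighborhood first, then one randomly explored site, and
            reinforce whichever site answers the request."""
            neighborhood = sorted(client_scores, key=client_scores.get, reverse=True)[:neighborhood_size]
            others = [s for s in all_sites if s not in neighborhood]
            candidates = neighborhood + random.sample(others, min(1, len(others)))
            for site in candidates:
                if content in all_sites[site]:
                    client_scores[site] = client_scores.get(site, 0) + 1
                    return site
            return None

        sites = {"s1": {"a", "b"}, "s2": {"c"}, "s3": {"d"}, "s4": {"c", "e"}}
        scores = {"s1": 5, "s2": 1, "s3": 0}
        print(search(scores, "c", sites))   # 's2' answers and its score is reinforced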

    Microcities: A Platform Based on Microclouds for Neighborhood Services

    The current datacenter-centralized architecture limits the cloud to the location of the datacenters, generally far from the user. This architecture collides with the latest trend of ubiquity of Cloud computing. Distance leads to increased utilization of the broadband Wide Area Network and to a poor user experience, especially for interactive applications. A semi-decentralized approach can provide a better Quality of Experience (QoE) to large urban populations in mobile cloud networks by confining local traffic near the user, while maintaining centralized characteristics and running on the users' and network devices. In this paper, we propose a novel semi-decentralized cloud architecture based on microclouds. Microclouds are dynamically created and allow users to contribute resources from their computers, mobile devices, and network devices to the cloud. Microclouds provide a dynamic and scalable system without extra investment in infrastructure. We also provide a description of a realistic mobile cloud use case and its adaptation to microclouds.

    A scalable architecture for end-to-end QoS provisioning

    The Differentiated Services (DiffServ) architecture has been proposed by the Internet Engineering Task Force as a scalable solution for providing end-to-end Quality of Service (QoS) guarantees over the Internet. While the scalability of the data plane emerges from the definition of only a small number of different service classes, the issue of a scalable control plane is still an open research problem. The initial proposal was to use a centralized agent, called Bandwidth Broker, to manage the resources within each DiffServ domain and make local admission control decisions. In this article, we propose an alternative decentralized approach, which significantly increases the scalability of both the data and control planes. We discuss in detail all the different aspects of the architecture, and indicate how to provide end-to-end QoS support for both unicast and multicast flows. Furthermore, we introduce a simple traffic engineering mechanism, which enables more efficient utilization of the network resources. © 2004 Elsevier B.V. All rights reserved.
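
    The traffic engineering mechanism is not spelled out in the abstract; one plausible illustration (an assumption, not necessarily the article's mechanism) is to keep a few precomputed candidate paths between edge routers and place each new flow on the feasible candidate with the largest residual bottleneck bandwidth, rather than always on the shortest path.

        def pick_path(candidate_paths, residual, demand):
            """candidate_paths: list of paths (each a list of link ids).
            residual: link id -> remaining bandwidth.
            Return the feasible path with the widest bottleneck and reserve the
            demand on it, or None if the flow must be blocked."""
            best, best_bottleneck = None, -1.0
            for path in candidate_paths:
                bottleneck = min(residual[l] for l in path)
                if bottleneck >= demand and bottleneck > best_bottleneck:
                    best, best_bottleneck = path, bottleneck
            if best is not None:
                for l in best:
                    residual[l] -= demand
            return best

        residual = {"a": 30.0, "b": 80.0, "c": 90.0}
        paths = [["a"], ["b", "c"]]        # shortest path vs. a longer alternative
        print(pick_path(paths, residual, 50.0))   # ['b', 'c']: the short path lacks capacity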

    PCloud: A distributed system for practical PIR

    Computational Private Information Retrieval (cPIR) protocols allow a client to retrieve one bit from a database without the server inferring any information about the queried bit. These protocols are too costly in practice because they invoke complex arithmetic operations for every bit of the database. In this paper, we present pCloud, a distributed system that constitutes the first attempt toward practical cPIR. Our approach assumes a disk-based architecture that retrieves one page with a single query. Using a striping technique, we distribute the database to a number of cooperative peers, and leverage their computational resources to process cPIR queries in parallel. We implemented pCloud on the PlanetLab network, and experimented extensively with several system parameters. Our results indicate that pCloud considerably reduces the query response time compared to the traditional client/server model, and has a very low communication overhead. Additionally, it scales well with an increasing number of peers, achieving a linear speedup. © 2011 IEEE
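
    The striping idea can be sketched as follows. The expensive per-bit cPIR computation is abstracted behind a placeholder function, and the round-robin page layout is an assumption rather than pCloud's exact scheme; the point is only that each peer works on its own stripe in parallel.

        from concurrent.futures import ThreadPoolExecutor

        def cpir_over_stripe(stripe, query):
            """Placeholder for the costly cPIR evaluation over one stripe. It
            simply returns the matching page in the clear, which a real cPIR
            protocol of course never does."""
            return [page for i, page in stripe if i == query]

        def pcloud_query(database, num_peers, page_index):
            """Split the pages into num_peers stripes and evaluate the query on
            every stripe in parallel, mimicking the cooperative peers."""
            pages = list(enumerate(database))
            stripes = [pages[i::num_peers] for i in range(num_peers)]
            with ThreadPoolExecutor(max_workers=num_peers) as pool:
                partials = pool.map(lambda s: cpir_over_stripe(s, page_index), stripes)
            return [p for partial in partials for p in partial][0]

        db = ["page-%d" % i for i in range(16)]
        print(pcloud_query(db, num_peers=4, page_index=9))   # 'page-9'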

    Increasing the performance of CDNs using replication and caching: A hybrid approach

    Caching and replication have emerged as the two primary techniques for reducing the delay experienced by end users when downloading web pages. Even though these techniques may benefit from each other, previous research work tends to focus on either one of them separately. In this paper we investigate the potential performance gains of using a CDN server both as a replicator and as a proxy server. We assume a common storage space for both techniques, and develop an analytical model that characterizes caching performance under various system parameters. Based on the model's predictions, we can reason about whether it is beneficial to reduce the caching space in order to allocate extra replicas. The resulting problem of finding which object replicas should be created where, given that any free space will be used for caching, is NP-complete. Therefore, we propose a hybrid heuristic algorithm (based on the greedy paradigm) to solve the combined replica placement and storage allocation problem. Our simulation results indicate that a simple LRU caching scheme can considerably improve the response time of HTTP requests when utilized over a replication-based infrastructure.
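
    A minimal sketch of a greedy heuristic in the spirit described above follows. The benefit function is a stand-in; in the paper the comparison against caching is driven by the analytical model, whereas here a constant per-unit cache value is assumed purely for illustration.

        def greedy_placement(objects, servers, capacity, replica_benefit, cache_value_per_unit):
            """objects: name -> size.  servers: list of server ids.
            capacity: server -> free storage shared by replicas and the cache.
            replica_benefit(obj, srv): estimated latency saving of that replica.
            Greedily add replicas while their benefit beats the opportunity cost
            of taking the same space away from the LRU cache."""
            placement, placed = [], set()
            while True:
                best = None
                for obj, size in objects.items():
                    for srv in servers:
                        if (obj, srv) in placed or capacity[srv] < size:
                            continue
                        gain = replica_benefit(obj, srv) - cache_value_per_unit * size
                        if gain > 0 and (best is None or gain > best[0]):
                            best = (gain, obj, srv, size)
                if best is None:
                    return placement
                _, obj, srv, size = best
                placed.add((obj, srv))
                capacity[srv] -= size
                placement.append((obj, srv))

        objs = {"video": 40, "logo": 2}
        benefit = lambda o, s: {"video": 100, "logo": 5}[o]
        print(greedy_placement(objs, ["cdn1"], {"cdn1": 50}, benefit, cache_value_per_unit=1.0))
        # [('video', 'cdn1'), ('logo', 'cdn1')]; the remaining 8 units stay with the LRU cache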

    Continuous spatial authentication

    Recent advances in wireless communications and positioning devices have generated a tremendous amount of interest in the continuous monitoring of spatial queries. However, such applications can incur a heavy burden on the data owner (DO), due to very frequent location updates. Database outsourcing is a viable solution, whereby the DO delegates its database functionality to a service provider (SP) that has the infrastructure and resources to handle the high workload. In this framework, authenticated query processing enables the clients to verify the correctness of the query results that are returned by the SP. In addition to correctness, the dynamic nature of the monitored data requires the provision for temporal completeness, i.e., the clients must be able to verify that there are no missing results in between data updates. This paper constitutes the first work that deals with the authentication of continuous spatial queries, focusing on ranges. We first introduce a baseline solution (BSL) that achieves correctness and temporal completeness, but incurs false transmissions; that is, the SP has to notify clients whenever there is a data update, even if it does not affect their results. Then, we propose CSA, a mechanism that minimizes the processing and transmission overhead through an elaborate indexing scheme and a virtual caching mechanism. Finally, we derive analytical models to optimize the performance of our methods, and evaluate their effectiveness through extensive experiments. © 2009 Springer Berlin Heidelberg
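
    The verification step can be illustrated with a generic Merkle-hash-tree check, which is the standard building block for authenticated query processing; it is shown here only as background and is not the paper's CSA structure or its virtual caching mechanism.

        import hashlib

        def h(*parts):
            return hashlib.sha256(b"|".join(parts)).hexdigest().encode()

        def build_tree(leaves):
            """All levels of a Merkle tree (leaf count assumed a power of two)."""
            levels = [[h(x) for x in leaves]]
            while len(levels[-1]) > 1:
                prev = levels[-1]
                levels.append([h(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
            return levels

        def prove(levels, index):
            """Sibling digests needed to recompute the root for leaf `index`."""
            proof = []
            for level in levels[:-1]:
                proof.append((level[index ^ 1], index % 2))   # (sibling, leaf-is-right-child)
                index //= 2
            return proof

        def verify(leaf, proof, signed_root):
            digest = h(leaf)
            for sibling, is_right in proof:
                digest = h(sibling, digest) if is_right else h(digest, sibling)
            return digest == signed_root

        data = [b"p1", b"p2", b"p3", b"p4"]   # e.g. points returned for a range query
        levels = build_tree(data)
        root = levels[-1][0]                  # in practice signed by the data owner (DO)
        print(verify(b"p3", prove(levels, 2), root))   # True: result checks against the signed root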